Motivated by penalized likelihood maximization in complex models, we study optimization problems where neither the function to optimize nor its gradient has an explicit expression, but the gradient can be approximated by a Monte Carlo technique. We propose a new algorithm based on a stochastic approximation of the Proximal-Gradient (PG) algorithm. This new algorithm, named Stochastic Approximation PG (SAPG), combines a stochastic gradient descent step, which, roughly speaking, computes a smoothed approximation of the past gradients along the iterations, with a proximal step. The choices of the step size and of the Monte Carlo batch size for the stochastic gradient descent step in SAPG are discussed. Our convergence results cover the cases of both biased and unbiased Monte Carlo approximations. While the convergence analysis of Monte Carlo PG has already been addressed in the literature (see Atchadé et al. [2016]), the convergence analysis of SAPG is new. The two algorithms are compared on a linear mixed effects model as a toy example. A more challenging application is proposed on non-linear mixed effects models in high dimension, with a pharmacokinetic data set including genomic covariates. To the best of our knowledge, our work provides the first convergence result for a numerical method designed to solve penalized maximum likelihood in a non-linear mixed effects model.
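To make the structure of the iteration concrete, the following is a minimal sketch, not the paper's algorithm: it applies an SAPG-style scheme to a toy l1-penalized least-squares problem where the gradient is only available through noisy mini-batch (Monte Carlo) estimates. All names, the fixed averaging weight `delta`, the constant step size `gamma`, and the fixed batch size are illustrative assumptions; the paper analyzes specific decreasing step-size and increasing batch-size schedules.

```python
import numpy as np

# Hypothetical toy problem: l1-penalized least squares, with the gradient of
# the smooth part observed only through noisy mini-batch estimates (a stand-in
# for a Monte Carlo gradient approximation).
rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
theta_true = np.zeros(d)
theta_true[:3] = [2.0, -1.0, 0.5]
y = X @ theta_true + 0.1 * rng.normal(size=n)
lam = 0.1  # l1 penalty weight (illustrative choice)

def mc_gradient(theta, batch_size, rng):
    """Noisy (Monte Carlo) estimate of the smooth-part gradient."""
    idx = rng.integers(0, n, size=batch_size)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ theta - yb) / batch_size

def prox_l1(v, t):
    """Proximal operator of t * lam * ||.||_1, i.e. soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

def sapg(n_iter=2000, gamma=0.01, delta=0.1, batch_size=10, seed=1):
    """SAPG-style iteration: keep a smoothed average of past noisy gradients,
    then take a proximal step.  The constant delta and gamma used here are a
    simplification of the schedules studied in the paper."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)
    g_bar = np.zeros(d)  # smoothed approximation of past gradients
    for _ in range(n_iter):
        g = mc_gradient(theta, batch_size, rng)
        g_bar = (1.0 - delta) * g_bar + delta * g   # gradient smoothing step
        theta = prox_l1(theta - gamma * g_bar, gamma)  # proximal step
    return theta

theta_hat = sapg()
```

The key structural point the sketch illustrates is that, unlike plain stochastic proximal gradient, the proximal step is applied to a running average of past noisy gradients rather than to the latest one, which is what the convergence analysis of SAPG has to handle.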